Enhancing Analogical Reasoning in the Abstraction and Reasoning Corpus via Model-Based RL
Jihwan Lee, Woochang Sim, Sejin Kim, Sundong Kim
This paper demonstrates that model-based reinforcement learning (model-based RL) is a suitable approach for analogical reasoning tasks. We hypothesize that model-based RL can solve such tasks more efficiently by constructing internal models of the environment. To test this, we compared DreamerV3, a model-based RL method, with Proximal Policy Optimization (PPO), a model-free RL method, on tasks from the Abstraction and Reasoning Corpus (ARC). Our results indicate that model-based RL not only outperforms model-free RL in learning and generalizing from single tasks but also shows significant advantages in reasoning across similar tasks.
abstraction and reasoning corpus, artificial intelligence, enhancing analogical reasoning, (1 more...)
arXiv:2408.14855
Technology: Information Technology > Artificial Intelligence > Representation & Reasoning > Analogical Reasoning (0.80)